Ansible: how to pass multiple commands to the command module

I tried this:
- command: ./configure chdir=/src/package/
- command: /usr/bin/make chdir=/src/package/
- command: /usr/bin/make install chdir=/src/package/
which works, but I was hoping for something neater.
So I tried this (from https://stackoverflow.com/questions/24043561/multiple-commands-in-the-same-line-for-bruker-topspin), which gives me back "no such file or directory":
- command: ./configure;/usr/bin/make;/usr/bin/make install chdir=/src/package/
I also tried https://u.osu.edu/hasnan.1/2013/12/16/ansible-run-multiple-commands-using-command-module-and-with-items/,
but I couldn't find the right syntax to put:
- command: "{{ item }}" chdir=/src/package/
  with_items:
    - ./configure
    - /usr/bin/make
    - /usr/bin/make install
That does not work either, saying there is a quote issue.

To run multiple shell commands with Ansible, you can use the shell module with a multi-line string (note the pipe after shell:), as shown in this example:
- name: Build nginx
  shell: |
    cd nginx-1.11.13
    sudo ./configure
    sudo make
    sudo make install
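One caveat: a shell block like this runs again on every play. If you want the task to be idempotent, you can guard it with the shell module's creates argument, which skips the task once a given file exists. A minimal sketch; the /usr/local/nginx/sbin/nginx path is an assumed install target, adjust it for your build:
- name: Build nginx (skipped once the binary exists)
  shell: |
    cd nginx-1.11.13
    ./configure
    make
    make install
  args:
    creates: /usr/local/nginx/sbin/nginx  # assumed install path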

If a value in YAML begins with a curly brace ({), the YAML parser assumes that it is a dictionary. So, for cases like this where the value begins with a (Jinja2) variable, one of the following two strategies needs to be adopted to avoid confusing the YAML parser:
Quote the whole command:
- command: "{{ item }} chdir=/src/package/"
  with_items:
    - ./configure
    - /usr/bin/make
    - /usr/bin/make install
or change the order of the arguments:
- command: chdir=/src/package/ {{ item }}
  with_items:
    - ./configure
    - /usr/bin/make
    - /usr/bin/make install
Thanks to @RamondelaFuente for the alternative suggestion.

Shell works for me.
Simply put, shell behaves the same as running a shell script.
Notes:
Make sure to use | when running multiple commands.
shell won't report an error as long as the last command succeeds (just like a normal shell).
Control it with exit 0/1 if you want to stop Ansible when an error occurs.
The following example contains a failing command, but the task still succeeds because the last command exits 0:
- name: test shell with an error
  become: no
  shell: |
    rm -f /test1 # This should be an error.
    echo "test2"
    echo "test1"
    echo "test3" # success
This example shows stopping the shell with an exit 1 error:
- name: test shell with exit 1
  become: no
  shell: |
    rm -f /test1 # This should be an error.
    echo "test2"
    exit 1 # this stops ansible due to returning an error
    echo "test1"
    echo "test3" # never reached
reference:
https://docs.ansible.com/ansible/latest/modules/shell_module.html

You can also do it like this:
- command: "{{ item }}"
  args:
    chdir: "/src/package/"
  with_items:
    - "./configure"
    - "/usr/bin/make"
    - "/usr/bin/make install"
Hope that might help others.

Here is a working example. \o/
- name: "Exec items"
  shell: "{{ item }}"
  with_items:
    - echo "hello"
    - echo "hello2"

I faced the same issue. In my case, part of my variables were in a dictionary, i.e. a with_dict variable (looping), and I had to run 3 commands on each item.key. This solution is more relevant where you have to run multiple commands over a with_dict dictionary (without requiring with_items).
Using with_dict and with_items in one task didn't help, as the variables were not resolved.
My task was like:
- name: Make install git source
  command: "{{ item }}"
  with_items:
    - cd {{ tools_dir }}/{{ item.value.artifact_dir }}
    - make prefix={{ tools_dir }}/{{ item.value.artifact_dir }} all
    - make prefix={{ tools_dir }}/{{ item.value.artifact_dir }} install
  with_dict: "{{ git_versions }}"
roles/git/defaults/main.yml was:
---
tool: git
default_git: git_2_6_3
git_versions:
  git_2_6_3:
    git_tar_name: git-2.6.3.tar.gz
    git_tar_dir: git-2.6.3
    git_tar_url: https://www.kernel.org/pub/software/scm/git/git-2.6.3.tar.gz
The above resulted in an error similar to the following for each {{ item }} (for the 3 commands mentioned above). As you can see, the value of tools_dir is not populated (tools_dir is a variable defined in a common role's defaults/main.yml), and item.value.git_tar_dir was not resolved either.
failed: [server01.poc.jenkins] => (item=cd {# tools_dir #}/{# item.value.git_tar_dir #}) => {"cmd": "cd '{#' tools_dir '#}/{#' item.value.git_tar_dir '#}'", "failed": true, "item": "cd {# tools_dir #}/{# item.value.git_tar_dir #}", "rc": 2}
msg: [Errno 2] No such file or directory
The solution was easy. Instead of using the "command" module in Ansible, I used the "shell" module and created a variable in roles/git/defaults/main.yml.
So, now roles/git/defaults/main.yml looks like:
---
tool: git
default_git: git_2_6_3
git_versions:
  git_2_6_3:
    git_tar_name: git-2.6.3.tar.gz
    git_tar_dir: git-2.6.3
    git_tar_url: https://www.kernel.org/pub/software/scm/git/git-2.6.3.tar.gz
#git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} all && make prefix={{ tools_dir }}/{{ item.value.git_tar_dir }} install"
#or use this if you want git installation to work in ~/tools/git-x.x.x
git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make prefix=`pwd` all && make prefix=`pwd` install"
#or use this if you want git installation to use the default prefix during make
#git_pre_requisites_install_cmds: "cd {{ tools_dir }}/{{ item.value.git_tar_dir }} && make all && make install"
and the task roles/git/tasks/main.yml looks like:
- name: Make install from git source
  shell: "{{ git_pre_requisites_install_cmds }}"
  become_user: "{{ build_user }}"
  with_dict: "{{ git_versions }}"
  tags:
    - koba
This time the values were successfully substituted, as the module was "shell", and the Ansible output echoed the correct values. This didn't require a with_items loop.
"cmd": "cd ~/tools/git-2.6.3 && make prefix=/home/giga/tools/git-2.6.3 all && make prefix=/home/giga/tools/git-2.6.3 install",


ansible debug msg failed_when and changed_when

My code:
# Update apt cache repository
- name: Update apt cache
  command: apt-get update
  register: return
# Debug msg
- debug:
    msg: '{{ return }}'
The return.stdout_lines output:
"stdout_lines": [
    "Hit:1 http://server/pub/mirrors/ubuntu bionic InRelease",
    "Get:2 http://server/pub/mirrors/ubuntu bionic-updates InRelease [88.7 kB]",
    "Get:3 http://server/pub/mirrors/ubuntu bionic-backports InRelease [74.6 kB]",
    "Hit:4 https://download.package.com/infrastructure_agent/linux/apt bionic InRelease",
    "Hit:5 https://archive.repo.package.com/apt/ubuntu/18.04/amd64/2018.3 bionic InRelease",
    "Get:6 http://server/pub/mirrors/ubuntu bionic-security InRelease [88.7 kB]",
    "Hit:7 http://apt.package123.org/pub/repos/apt bionic-pgdg InRelease",
    "Fetched 252 kB in 1s (222 kB/s)",
    "Reading package lists..."
]
I would like to check whether any single line contains both the strings "Err" and "package", which would mean it has failed to update the apt cache from the website https://download.package.com/.
I was thinking of something like below:
changed_when: >
  (("Get" in return.stdout_lines) and
   ("package" in return.stdout_lines)) or
  (("Hit" in return.stdout_lines) and
   ("package" in return.stdout_lines))
failed_when: >
  ("Err" in return.stdout_lines) and
  ("package" in return.stdout_lines)
My question: do these checks look for the two strings across all of the lines, or line by line?
If they don't work line by line, how do I get a line-by-line check working?
When APT fails to download the package, it will return an error. In that case, stdout isn't used anymore, but stderr. So you most likely need to change it to:
# Update apt cache repository
- name: Update apt cache
  command: apt-get update
  ignore_errors: true
  register: return
  changed_when: >
    (("Get" in return.stderr_lines) and
     ("package" in return.stderr_lines)) or
    (("Hit" in return.stderr_lines) and
     ("package" in return.stderr_lines))
  failed_when: >
    ("Err" in return.stderr_lines) and
    ("package" in return.stderr_lines)
# Debug msg
- debug:
    msg: '{{ return }}'
Question: do these checks look for the two strings across all of the lines, or line by line?
A: They look across all of the text in the stderr_lines output of that task, not line by line.
Question: if so, how do I get this working as a line-by-line check?
A: Your code looks sane.
Perhaps you want to test this specific use case yourself. What you could do is append the following to /etc/hosts on your target system: 127.0.0.1 download.package.com
This way, APT looks for the repository on the target machine itself. Since it isn't there, the download will fail.
Also note that Ansible has cache updating built into the apt module.
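A minimal sketch of that built-in approach; the module reports failures itself, so no string matching on the output is needed:
- name: Update apt cache via the apt module
  apt:
    update_cache: yes
  register: apt_result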
Here you go, working code:
-----------------
# Update apt cache repository
- name: Update apt cache
  command: apt-get update
  register: return
  failed_when: return.stdout_lines is search("Err:* https://download.package.com")
  changed_when: return.stdout_lines is search("Get:* https://download.package.com")
# Debug msg
- debug:
    msg: '{{ return }}'
-----------------
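If you need a true line-by-line check (both strings on the same line), one option, not from the original thread, is to chain Jinja2 select filters so the second filter only sees the lines that already matched the first. A hedged sketch:
- name: Update apt cache
  command: apt-get update
  register: return
  failed_when: >-
    return.stdout_lines | select('search', 'Err')
                        | select('search', 'download.package.com')
                        | list | length > 0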

Ansible: how to use with_dict with extra_vars?

How can I use with_dict with extra_vars?
I have tried everything I know, but every attempt outputs "with_dict expects a dict" :(
These are all the files:
# vars.yml
rd1:
  Terry:
    user_name: terry_liu
    user_birth: 1994/05/11
  Cary:
    user_name: cary_lin
    user_birth: 1992/02/19
rd6:
  Jessie:
    user_name: jessie_chen
    user_birth: 1996/11/20
  Sherry:
    user_name: sherry_hsu
    user_birth: 1989/07/23
-
# test.yml
- name: demo
  hosts: test
  vars_files:
    - vars.yml
  tasks:
    - name: show data
      debug:
        msg: "{{ item }}"
      with_dict: "{{ dep }}"
-
#command
ansible-playbook -i inventory test.yml --extra-vars 'dep=rd1'
-
The inventory host is my test VM; it just has an IP and can be reached over SSH.
When I run the command, it outputs: fatal: [172.16.1.227]: FAILED! => {"msg": "with_dict expects a dict"}
I think it needs a variable inside a variable, but I have tried many different ways and they all fail.
What I want is to pass a single dep variable and get the corresponding data from vars.yml.
Thanks all, have a good day!
The problem is that "{{ dep }}" evaluates to the string "rd1"
with_dict: "{{ dep }}"
This is the reason for the error "with_dict expects a dict".
Instead, you need the vars lookup plugin. For example:
with_dict: "{{ lookup('vars', dep) }}"

Set different ORACLE_HOME and PATH environment variable using Ansible

I'm currently querying multiple databases and capturing the results of the queries.
The way I'm doing it is by writing a task which copies a shell script, something like below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task as below:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook so that the task picks up the right ORACLE_HOME and PATH and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
and similarly for the PATH variable.
Your task can then reference these variables, as in the sample playbook below, which demonstrates the results:
- hosts: is_hosts
  gather_facts: false
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output; you can see the ORACLE_HOME variable was indeed changed, as defined per host:
TASK [print results] *********************************************************
ok: [greenhat] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/tmp",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
ok: [localhost] => {
    "shell_output.stdout_lines": [
        "ORACLE_HOME=/sbin",
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
    ]
}
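An alternative sketch (my addition, not part of the original answer) keeps the same idea but moves the values into host_vars files, which scales better than inline inventory variables; the oracle_home and oracle_path variable names are made up for this example:
# host_vars/greenhat.yml
oracle_home: /tmp
oracle_path: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin

# the task then no longer needs hostvars lookups:
- name: task 1
  shell: "env | grep -e PATH -e ORACLE_HOME"
  environment:
    PATH: "{{ oracle_path }}"
    ORACLE_HOME: "{{ oracle_home }}"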

How can I pass variable to ansible playbook in the command line?

How can one pass a variable to an Ansible playbook on the command line?
The following command didn't work:
$ ansible-playbook -i '10.0.0.1,' yada-yada.yml --tags 'loaddata' django_fixtures="tile_colors"
Where django_fixtures is my variable.
Reading the docs, I found the section Passing Variables On The Command Line, which gives this example:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
Other examples demonstrate how to load variables from a JSON string (≥1.2) or a file (≥1.3).
Other answers state how to pass variables on the command line but not how to access them. So if you do:
--extra-vars "version=1.23.45 other_variable=foo"
in your yml file you can assign these to scoped Ansible variables by doing something like:
vars:
  my_version: "{{ version }}"
  my_other_variable: "{{ other_variable }}"
An alternative to command-line arguments is to use environment variables that are already defined within your session. You can reference these within your Ansible yml files like this:
vars:
  my_version: "{{ lookup('env', 'version') }}"
  my_other_variable: "{{ lookup('env', 'other_variable') }}"
ansible-playbook release.yml -e "version=1.23.45 other_variable=foo"
For some reason none of the above answers worked for me. As I needed to pass several extra vars to my playbook in Ansible 2.2.0, this is how I got it working (note the -e option before each var):
ansible-playbook site.yaml -i hostinv -e firstvar=false -e second_var=value2
You can use the --extra-vars option. See the docs:
ansible-playbook test.yml --extra-vars "arg1=${var1} arg2=${var2}"
In the yml file you can use them like this:
---
arg1: "{{ var1 }}"
arg2: "{{ var2 }}"
Also, --extra-vars and -e are the same; you can use either of them.
s3_sync:
  bucket: ansible-harshika
  file_root: "{{ pathoftsfiles }}"
  validate_certs: false
  mode: push
  key_prefix: "{{ folder }}"
Here the variables being used are named 'pathoftsfiles' and 'folder'. The values for these variables can be supplied by the command below:
sudo ansible-playbook multiadd.yml --extra-vars "pathoftsfiles=/opt/lampp/htdocs/video/uploads/tsfiles/$2 folder=nitesh"
Note: don't put quotation marks around the individual values when passing them to the variables in the shell command.
This also worked for me if you want to use shell environment variables:
ansible-playbook -i "localhost," ldap.yaml --extra-vars="LDAP_HOST={{ lookup('env', 'LDAP_HOST') }} clustername=mycluster env=dev LDAP_USERNAME={{ lookup('env', 'LDAP_USERNAME') }} LDAP_PASSWORD={{ lookup('env', 'LDAP_PASSWORD') }}"
In Ansible, we can define variables when running our playbook by passing them at the command line using the --extra-vars (or -e) argument.
Below are some ways to pass variables to an Ansible playbook on the command line:
Method 1: key=value format
ansible-playbook site.yml --extra-vars "arg1=demo1 arg2=demo2"
Method 2: JSON string format
ansible-playbook site.yml --extra-vars '{"arg1":"demo1","arg2":"demo2"}'
The site.yml playbook will be:
---
- name: ansible playbook to print external variables
  hosts: localhost
  connection: local
  tasks:
    - name: print values
      ansible.builtin.debug:
        msg: "variable1 = {{ arg1 }}, variable2 = {{ arg2 }}"
      when: arg1 is defined and arg2 is defined
Method 3: Read from an external JSON file
If you have a lot of special characters, use a JSON or YAML file containing the variable definitions. Note the @ prefix, which tells Ansible to read the variables from a file:
ansible-playbook site.yml --extra-vars "@vars.json"
The vars.json file:
{
  "arg1": "demo1",
  "arg2": "demo2"
}
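The YAML equivalent works the same way:
ansible-playbook site.yml --extra-vars "@vars.yml"
The vars.yml file:
arg1: demo1
arg2: demo2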
ansible-playbook release.yml --extra-vars "username=hello password=bye"
#you can now use the above command anywhere in the playbook as an example below:
tasks:
- name: Create a new user in Linux
shell: useradd -m -p {{username}} {{password}}"
ansible-playbook -i <inventory> <playbook-name> -e "proc_name=sshd"
You can use the above command with the playbook below.
---
- name: Service Status
  hosts: all
  gather_facts: False
  tasks:
    - name: Check Service Status (Linux)
      shell: pgrep "{{ proc_name }}"
      register: service_status
      ignore_errors: yes
    - debug: var=service_status.rc

Ansible Do Task If Apt Package Is Missing

I'm looking to do a series of tasks if a specific apt package is missing.
for example:
if graphite-carbon is NOT installed do:
- apt: name=debconf-utils state=present
- shell: echo 'graphite-carbon/postrm_remove_databases boolean false' | debconf-set-selections
- apt: name=debconf-utils state=absent
another example:
if statsd is NOT installed do:
- file: path=/tmp/build state=directory
- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
- shell: dpkg -i /tmp/build/statsd*.deb
How would I begin to crack this?
I'm thinking maybe I can do a shell task like dpkg -l | grep <package name> and capture the return code somehow.
You can use the package_facts module (requires Ansible 2.5):
- name: Gather package facts
  package_facts:
    manager: apt
- name: Install debconf-utils if graphite-carbon is absent
  apt:
    name: debconf-utils
    state: present
  when: '"graphite-carbon" not in ansible_facts.packages'
...
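package_facts also exposes version details, so you can branch on those as well. A small sketch (my addition) that prints the installed version when the package is present:
- name: Show installed graphite-carbon version, if any
  debug:
    msg: "{{ ansible_facts.packages['graphite-carbon'][0].version }}"
  when: '"graphite-carbon" in ansible_facts.packages'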
It looks like my solution is working.
This is an example of how I have it working:
- shell: dpkg-query -W 'statsd'
  ignore_errors: True
  register: is_statd
- name: create build dir
  file: path=/tmp/build state=directory
  when: is_statd|failed
- name: install dev packages for statd build
  apt: name={{ item }}
  with_items:
    - git
    - devscripts
    - debhelper
  when: is_statd|failed
- shell: cd /tmp/build ; git clone https://github.com/etsy/statsd.git ; cd statsd ; dpkg-buildpackage
  when: is_statd|failed
....
Here is another example:
- name: test if create_superuser.sh exists
  stat: path=/tmp/create_superuser.sh
  ignore_errors: True
  register: f
- name: create graphite superuser
  command: /tmp/create_superuser.sh
  when: f.stat.exists == True
...and one more:
- stat: path=/tmp/build
  ignore_errors: True
  register: build_dir
- name: destroy build dir
  shell: rm -fvR /tmp/build
  when: build_dir.stat.isdir is defined and build_dir.stat.isdir
I think you're on the right track with the dpkg | grep, except that the return code will be 0 in any case. But you can simply check the output:
- shell: dpkg-query -l '<package name>'
  register: dpkg_result
- do_something:
  when: dpkg_result.stdout != ""
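A variation on the same idea (my addition, not the answerer's code): let the query fail quietly and branch on the registered return code, so no output parsing is needed:
- name: Check whether statsd is installed
  shell: dpkg-query -W statsd
  register: statsd_check
  failed_when: false   # never fail here; we only want the return code
  changed_when: false  # a query changes nothing
- name: Build statsd only when the package is missing
  shell: cd /tmp/build && git clone https://github.com/etsy/statsd.git
  when: statsd_check.rc != 0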
I'm a bit late to this party, but here's another example that uses exit codes; ensure you explicitly match the desired status text in the dpkg-query results:
- name: Check if SystemD is installed
  shell: dpkg-query -s systemd | grep 'install ok installed'  # shell, not command, so the pipe works
  ignore_errors: true  # a missing package must not abort the play here
  register: dpkg_check
  tags: ntp
- name: Update repositories cache & install SystemD if it is not installed
  apt:
    name: systemd
    update_cache: yes
  when: dpkg_check.rc == 1
  tags: ntp