I have a playbook that creates an EC2 instance, copies a few files over to the instance, and then runs some shell commands on it.
The issue is that I want to specify which SSH key Ansible uses for the copy and shell tasks, and make sure it does not attempt to use this key for the other tasks, which run on localhost. Here is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars:
    # CentOS 7 x86_64 Devel AtomicHost EBS HVM 20150306_01 (ami-07e6c437)
    # for us-west-2
    - ami: 'ami-07e6c437'
    - key_pair: 'my-key'
  tasks:
    - name: Create a centos server
      ec2:
        region: 'us-west-2'
        key_name: '{{ key_pair }}'
        group: default
        instance_type: t2.micro
        image: '{{ ami }}'
        wait: true
        exact_count: 1
        count_tag:
          Name: my-instance
        instance_tags:
          Name: my-instance
      register: ec2
    # shows the json data for the instances created
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: Wait for SSH to come up
      wait_for: host={{ ec2['tagged_instances'][0]['public_ip'] }} port=22 delay=1 timeout=480 state=started
    - name: Accept new ssh fingerprints
      shell: ssh-keyscan -H "{{ ec2['tagged_instances'][0]['public_ip'] }}" >> ~/.ssh/known_hosts
    # THE TASKS I NEED HELP ON
    - name: Copy files over to ec2 instance
      remote_user: centos
      copy: src={{ item }} dest=/home/centos/ mode=600
      with_fileglob:
        - my-files/*
      delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
    # THE TASKS I NEED HELP ON
    - name: run commands
      remote_user: centos
      shell: "{{ item }}"
      delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
      with_items:
        - "sudo yum update -y"
        - "sudo yum install nmap ruby"
      ignore_errors: true
Yeah, I agree with @techraf. But the answer to the question you posted is that you have to dynamically add the newly provisioned instance to your inventory and then run remote Ansible plays against that new host. So you would add this to the end of your first play:
- local_action:
    module: add_host
    hostname: newhost
    ansible_host: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
    ansible_user: centos
    ansible_ssh_private_key_file: /path/to/keyfile
###### New play
- name: Configure my new instance!
  hosts: newhost
  tasks:
    # THE TASKS I NEED HELP ON
    - name: Copy files over to ec2 instance
      copy: src={{ item }} dest=/home/centos/ mode=600
      with_fileglob:
        - my-files/*
    # Use the yum module here instead, much easier
    - name: run commands
      shell: "{{ item }}"
      with_items:
        - "sudo yum update -y"
        - "sudo yum install nmap ruby"
      ignore_errors: true
Edit: Adding that you can always just set the SSH private key file by using:
- set_fact: ansible_ssh_private_key_file=/path/to/keyfile
with the caveat that the above set_fact will only change the SSH private key file for the currently targeted host (e.g., for localhost in your example play above).
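For completeness, a sketch of scoping the key to a single delegated task via task-level vars, so no other task can pick it up (the key path is a placeholder):

```yaml
- name: Copy files over to ec2 instance
  copy: src={{ item }} dest=/home/centos/ mode=600
  with_fileglob:
    - my-files/*
  delegate_to: "{{ ec2['tagged_instances'][0]['public_ip'] }}"
  remote_user: centos
  vars:
    ansible_ssh_private_key_file: /path/to/keyfile
```

Task-level vars take precedence over play- and inventory-level values, so the key applies to this task only.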
I am trying to create a Galaxy role for our org's internal Galaxy, which I am testing locally first. In our org we use a common list of defaults across all roles.
Ansible throws a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite my having defined the variable in defaults/main.yml:
# Download
redis_version: "6.2.3"
redis_download_url: "https://download.redis.io/releases/redis-{{ redis_version }}.tar.gz"
When running my simple role/playbook.yml:
---
- hosts: all
  become: true
  tasks:
    - include: tasks/main.yml
which includes tasks/main.yml:
---
- name: Check ansible version
  assert:
    that: "ansible_version.full is version_compare('2.4', '>=')"
    msg: "Please use Ansible 2.4 or later"
- include: download.yml
  tags:
    - download
- include: install.yml
  tags:
    - install
That in turn should pull the tarball via tasks/download.yml:
---
- name: Download Redis
  get_url:
    url: "{{ redis_download_url }}"
    dest: /usr/local/src/redis-{{ redis_version }}.tar.gz
- name: Extract Redis tarball
  unarchive:
    src: /usr/local/src/redis-{{ redis_version }}.tar.gz
    dest: /usr/local/src
    creates: /usr/local/src/redis-{{ redis_version }}/Makefile
    copy: no
The redis_download_url var is defined in defaults/main.yml, which, as I understand it, Ansible should be able to locate. I also have similar vars defined in defaults/task.yml, e.g.
redis_user: redis
redis_group: "{{ redis_user }}"
redis_port: "6379"
redis_root_dir: "/opt/redis"
redis_config_dir: "/etc/redis"
redis_conf_file: "{{ redis_config_dir }}/{{ redis_port }}.conf"
redis_password: "change-me"
redis_protected_mode: "yes"
and I assume those are also not found by Ansible (though it does not get that far). I have also checked all file permissions and they seem fine.
Apologies in advance if the question is badly formatted.
As per the documentation:
If you include a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or roles: [].
- hosts: all
  become: true
  tasks:
    - include_role:
        name: myrole
OR
- hosts: all
  become: true
  roles:
    - myrole
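For context, defaults/main.yml is only read when Ansible loads the role as a role; the layout assumed here (the role name myrole is an assumption) is:

```
roles/myrole/
├── defaults/
│   └── main.yml      # redis_version, redis_download_url, ...
└── tasks/
    ├── main.yml
    ├── download.yml
    └── install.yml
```

A bare include: tasks/main.yml pulls in the task file only, so nothing ever reads defaults/main.yml, which is why redis_download_url comes up undefined.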
This concerns Ansible playbooks.
Is it possible to run a command on the remote machine and store the resulting output into a variable?
I am trying to get the kernel version and install the matching headers like this:
---
- hosts: all
  tasks:
    - name: install headers
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-`uname -r`
          #- a lot of other packages here
Unfortunately uname -r is not executed here.
I am aware of this question: Ansible: Store command's stdout in new variable?
But it looks like this is another topic.
By definition:
Ansible facts are data related to your remote systems, including
operating systems, IP addresses, attached filesystems, and more.
In this link you can see all ansible facts that you can use.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_vars_facts.html
One of those variables is ansible_kernel, which holds the kernel version of your remote system. Ansible gathers these facts by default, but to make sure they are collected you can set gather_facts: yes.
---
- hosts: all
  gather_facts: yes
  tasks:
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ ansible_kernel }}
I found a solution, but I am not sure it is really elegant:
---
- hosts: all
  tasks:
    - name: Getting Kernel Version
      command: "uname -r"
      register: kernel_version
    - name: install
      become: yes
      become_method: sudo
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - linux-headers-{{ kernel_version.stdout }}
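Both approaches resolve to the same string, since ansible_kernel is simply the remote host's uname -r. You can sanity-check the package name composition directly in a shell (the resulting name is only meaningful on Debian/Ubuntu):

```shell
# Build the header package name the same way the playbook does.
pkg="linux-headers-$(uname -r)"
echo "$pkg"
```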
How can I use with_dict with a variable passed via --extra-vars?
I have tried everything I can think of, but every attempt ends with "with_dict expects a dict" :(
These are all the files:
# vars.yml
rd1:
  Terry:
    user_name: terry_liu
    user_birth: 1994/05/11
  Cary:
    user_name: cary_lin
    user_birth: 1992/02/19
rd6:
  Jessie:
    user_name: jessie_chen
    user_birth: 1996/11/20
  Sherry:
    user_name: sherry_hsu
    user_birth: 1989/07/23
# test.yml
- name: demo
  hosts: test
  vars_files:
    - vars.yml
  tasks:
    - name: show data
      debug:
        msg: "{{ item }}"
      with_dict: "{{ dep }}"
# command
ansible-playbook -i inventory test.yml --extra-vars 'dep=rd1'
The inventory host is my test VM; it just has an IP and is reachable over SSH.
When I run the command, it outputs: fatal: [172.16.1.227]: FAILED! => {"msg": "with_dict expects a dict"}
I think this needs a variable inside a variable, but I have tried many different ways and they all fail.
What I want is to pass a single dep var and get the corresponding data from vars.yml.
Thanks all, have a good day!
The problem is that "{{ dep }}" evaluates to the string "rd1":
with_dict: "{{ dep }}"
This is the reason for the error "with_dict expects a dict".
Instead, you need the vars lookup plugin. For example:
with_dict: "{{ lookup('vars', dep) }}"
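With the vars.yml from the question, the fixed task would look like this (a sketch; the lookup resolves the variable whose name is stored in dep, and item.key/item.value are the standard with_dict loop fields):

```yaml
- name: show data
  debug:
    msg: "{{ item.key }}: {{ item.value.user_name }}"
  with_dict: "{{ lookup('vars', dep) }}"
```

Running ansible-playbook -i inventory test.yml --extra-vars 'dep=rd1' then iterates over Terry and Cary.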
I'm currently querying multiple databases and capturing the results of the queries.
The way I'm doing it: I write a task which copies over a shell script, something like below:
#!/bin/bash
source $HOME/bin/gsd_xenv $1 &> /dev/null
sqlplus -s <<EOF
/ as sysdba
set heading off
select d.name||','||i.instance_name||','||i.host_name||';' from v\$database d,v\$instance i;
EOF
In the playbook, I'm writing the task as below:
- name: List Query [Host and DB]
  shell: "/tmp/sqlscript/sql_select.sh {{ item }} >> /tmp/sqlscript/output.out"
  become: yes
  become_method: sudo
  become_user: oracle
  environment:
    PATH: "/home/oracle/bin:/usr/orasys/12.1.0.2r10/bin:/usr/bin:/bin:/usr/ucb:/sbin:/usr/sbin:/etc:/usr/local/bin:/oradata/epdmat/goldengate/config/sys"
    ORACLE_HOME: "/usr/orasys/12.1.0.2r10"
  with_items: "{{ factor_dbs.split('\n') }}"
However, I have noticed that different hosts have different ORACLE_HOME and PATH values. How can I define those variables in the playbook so that the task picks the right ORACLE_HOME and PATH for each host and executes successfully?
You can define host-specific variables for each of the hosts. You can write your inventory file like:
[is_hosts]
greenhat ORACLE_HOME=/tmp
localhost ORACLE_HOME=/sbin
similarly for the PATH variable
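If the inventory lines get crowded, the same per-host values can also live in host_vars files (a sketch; the file name must match the inventory hostname, and the values here are just the ones from the inventory example):

```yaml
# host_vars/greenhat.yml
ORACLE_HOME: /tmp
```

Ansible picks these files up automatically next to the inventory, and hostvars reads them the same way as inline inventory variables.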
Then reference them in your task via hostvars. A sample playbook that demonstrates the result:
- hosts: is_hosts
  gather_facts: false
  tasks:
    - name: task 1
      shell: "env | grep -e PATH -e ORACLE_HOME"
      environment:
        # PATH: "{{ hostvars[inventory_hostname]['PATH'] }}"
        ORACLE_HOME: "{{ hostvars[inventory_hostname]['ORACLE_HOME'] }}"
      register: shell_output
    - name: print results
      debug:
        var: shell_output.stdout_lines
Sample output; you can see the ORACLE_HOME variable was indeed changed, as defined per host:
TASK [print results] ************************************************************************************************************************************************************************************************
ok: [greenhat] => {
"shell_output.stdout_lines": [
"ORACLE_HOME=/tmp",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
]
}
ok: [localhost] => {
"shell_output.stdout_lines": [
"ORACLE_HOME=/sbin",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
]
}
I have an Ansible playbook that kills running processes, and it works great most of the time! However, from time to time we find processes that just can't be killed, so wait_for reaches its timeout, throws an error, and stops the play.
The current workaround is to go into the box manually, use "kill -9", and run the playbook again. Is there any way to handle this scenario from Ansible itself? I don't want to use kill -9 from the beginning, but could I handle the timeout, or use kill -9 only if a process hasn't died after 300 seconds? What would be the best way to do this?
These are the tasks I currently have:
- name: Get running processes
  shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'"
  register: running_processes
- name: Kill running processes
  shell: "kill {{ item }}"
  with_items: "{{ running_processes.stdout_lines }}"
- name: Waiting until all running processes are killed
  wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ running_processes.stdout_lines }}"
Thanks!
You could ignore errors on wait_for and register the result to force kill failed items:
- name: Get running processes
  shell: "ps -ef | grep -v grep | grep -w {{ PROCESS }} | awk '{print $2}'"
  register: running_processes
- name: Kill running processes
  shell: "kill {{ item }}"
  with_items: "{{ running_processes.stdout_lines }}"
- wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ running_processes.stdout_lines }}"
  ignore_errors: yes
  register: killed_processes
- name: Force kill stuck processes
  shell: "kill -9 {{ item }}"
  with_items: "{{ killed_processes.results | select('failed') | map(attribute='item') | list }}"
Use pkill(1) instead of grep+kill.
The killall command has a wait option that might be useful.
Install psmisc:
tasks:
  - apt: pkg=psmisc state=present
Then use killall like this:
tasks:
  - shell: "killall screen --wait"
    ignore_errors: true # In case there is no process
-w, --wait:
Wait for all killed processes to die. killall checks once per second if any of the killed processes still exist and only returns if none are left.
After some investigation and struggling with ansible-lint, here is an alternative way (i.e., without the use of ignore_errors):
- name: "disable and stop PROCESS"
  service:
    name: PROCESS
    enabled: no
    state: stopped
  tags:
    - stop
- name: "evaluate running PROCESS processes from remote host"
  command: "/usr/bin/pgrep PROCESS"
  register: running_PROCESS
  failed_when: running_PROCESS.rc > 1
  changed_when:
    - running_PROCESS.rc == 0
  tags:
    - stop
- name: "pkill PROCESS running processes"
  command: "/usr/bin/pkill PROCESS"
  register: kill_PROCESS
  changed_when: kill_PROCESS.rc == 0
  when: running_PROCESS.stdout_lines | length > 0
  tags:
    - stop
- name: "check PROCESS processes"
  wait_for:
    path: "/proc/{{ item }}/status"
    state: absent
  with_items: "{{ running_PROCESS.stdout_lines }}"
  when: running_PROCESS.stdout_lines | length > 0
  tags:
    - stop
- name: "force kill stuck PROCESS processes"
  command: "/usr/bin/pkill -9 PROCESS"
  register: kill9_PROCESS
  changed_when: kill9_PROCESS.rc == 0
  failed_when: kill9_PROCESS.rc > 1
  when:
    - running_PROCESS.stdout_lines | length > 0
  tags:
    - stop