I'm trying to run a script with different arguments on different servers using Ansible, for example:
server 192.168.0.1 -> fabric.sh generic1 status
server 192.168.0.2 -> fabric.sh generic2dr status
server 192.168.0.3 -> fabric.sh generic3 status
How can I use variables in the playbook below?
It works when I create a group for each server, but that's not efficient.
---
- hosts: esb
  remote_user: root
  tasks:
    - name: Generic_1
      become_user: esb
      shell: "/home/fabric.sh generic1 status"
Host file:
[esb]
192.168.0.1
192.168.0.2
192.168.0.3
You can set per-host variables in your inventory. For example, modify your inventory so it looks like this:
[esb]
192.168.0.1 fabric_args="generic1 status"
192.168.0.2 fabric_args="generic2dr status"
192.168.0.3 fabric_args="generic3 status"
And then use the fabric_args variable in your playbook:
---
- hosts: esb
  remote_user: root
  tasks:
    - name: Generic_1
      become_user: esb
      shell: "/home/fabric.sh {{ fabric_args }}"
For more information, read the Using Variables and Working with Inventory sections of the Ansible documentation.
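As a variation on the same idea, the per-host values can also live in host_vars files next to the inventory, which keeps the inventory itself uncluttered (the file names below assume the default host_vars layout):
# host_vars/192.168.0.1.yml
fabric_args: "generic1 status"

# host_vars/192.168.0.2.yml
fabric_args: "generic2dr status"

# host_vars/192.168.0.3.yml
fabric_args: "generic3 status"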
Next to the basic INI inventory there is the yaml inventory plugin – it uses a specific YAML file as an inventory source.
For example (fit the variables to your needs):
$ cat hosts
all:
  hosts:
    10.1.0.51:
    10.1.0.52:
    10.1.0.53:
  vars:
    ansible_connection: ssh
    ansible_user: admin
    ansible_become: yes
    ansible_become_user: root
    ansible_become_method: sudo
    ansible_python_interpreter: /usr/local/bin/python3.6
    ansible_perl_interpreter: /usr/local/bin/perl
  children:
    esb:
      hosts:
        10.1.0.51:
          run_string: "fabric.sh generic1 status"
        10.1.0.52:
          run_string: "fabric.sh generic2dr status"
        10.1.0.53:
          run_string: "fabric.sh generic3 status"
The play below
- hosts: esb
  tasks:
    - debug:
        var: run_string
gives (abridged):
ok: [10.1.0.51] => {
    "run_string": "fabric.sh generic1 status"
}
ok: [10.1.0.52] => {
    "run_string": "fabric.sh generic2dr status"
}
ok: [10.1.0.53] => {
    "run_string": "fabric.sh generic3 status"
}
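With that in place, the per-host variable can drive the original command directly (a sketch, assuming the script lives in /home as in the question):
- hosts: esb
  remote_user: root
  tasks:
    - name: Run fabric.sh with per-host arguments
      become_user: esb
      shell: "/home/{{ run_string }}"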
I am trying to test something at home with the variables mechanism Ansible offers, which I am about to implement in one of my projects at work. I've been searching for a while now, but it seems I can't get it working that easily, even with others' solutions here and there.
I will represent my project logic from work by demonstrating with my test directory and file structure at home. Here's the case: I have the following playbooks:
main.yaml
pl1.yaml
pl2.yaml
Contents of ./main.yaml:
- import_playbook: /home/martin/ansible/pl1.yaml
- import_playbook: /home/martin/ansible/pl2.yaml
Contents of ./pl1.yaml:
- name: Test playbook 1
  hosts: localhost
  tasks:
    - name: Discovering the secret host
      shell: cat /home/martin/secret
      register: whichHostAd
    - debug:
        msg: "{{ whichHostAd.stdout }}"
    - name: Discovering my hostname
      shell: hostname
      register: myHostnameAd
    - set_fact:
        whichHost: "{{ whichHostAd.stdout }}"
        myHostname: "{{ myHostnameAd.stdout }}"
        cacheable: yes

- name: Test playbook 1 part 2
  hosts: "{{ hostvars['localhost']['ansible_facts']['whichHost'] }}"
  tasks:
    - name: Structuring info
      shell: hostname
      register: secretHostname
    - name: Showing the secret hostname
      debug:
        msg: "{{ secretHostname.stdout }}"
Contents of ./pl2.yaml:
- name: Test Playbook 2
  hosts: "{{ whichHost }}"
  tasks:
    - name: Finishing up
      shell: echo "And here am i again.." && hostname
    - name: Showing var myHostname
      debug:
        msg: "{{ myHostname.stdout }}"
The whole idea is to have a working variable available in the hosts field between the plays. How do we do that?
The playbook does not run at all if I don't define the whichHost variable as an extra arg, and that's OK, I can do it each time, but during execution I would like that variable to be manageable and changeable. In the test case above, I want whichHost to be usable everywhere across the plays/playbooks included in main.yaml, specifically to reflect the output of the first task in pl1.yaml (i.e. the whichHostAd.stdout variable), so I can determine the host I am about to target in pl2.yaml.
According to the docs, I should be able to at least access it with hostvars (as in my playbook), but this is the output I get when I try the above example:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'dict object' has no attribute 'whichHost'
The error appears to have been in '/home/martin/ansible/pl1.yaml': line 22, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Test playbook 1 part 2
^ here
set_fact also does not seem to be very helpful. Any help will be appreciated!
Ok, I've actually figured it out pretty fast.
So, we definitely need a set_fact task holding the actual data/output:
- hosts: localhost
  tasks:
    - name: Saving variable
      set_fact:
        whichHost: "{{ whichHostAd.stdout }}"
After that, when you want to use the variable in other hosts and plays, you have to reference both the host and the fact:
"{{ hostvars['localhost']['whichHost'] }}"
Like in my test above, but without ['ansible_facts']
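Put together, a minimal sketch of the two plays (it assumes /home/martin/secret contains a hostname that exists in the inventory):
- hosts: localhost
  tasks:
    - name: Discovering the secret host
      shell: cat /home/martin/secret
      register: whichHostAd
    - name: Saving variable
      set_fact:
        whichHost: "{{ whichHostAd.stdout }}"

- hosts: "{{ hostvars['localhost']['whichHost'] }}"
  tasks:
    - name: Showing the secret hostname
      shell: hostname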
I'm stuck on using a variable from tasks within roles in an Ansible playbook. My playbook is the following:
- hosts: server.com
  gather_facts: yes
  tasks:
    - set_fact:
        private_ip: "{{ item }}"
      with_items: "{{ ansible_all_ipv4_addresses }}"
      when: "item.startswith('10.')"
    - debug: var=private_ip
  roles:
    - role: check-server
      server_ip: 10.10.0.1
      client_ip: "{{ private_ip }}"
When the playbook is run, debug shows the correct IP inside the variable private_ip, but I can't make client_ip (from the roles block) get the private_ip content. client_ip always remains undefined.
What sorcery can I apply here to have client_ip=$private_ip?
tasks are executed after roles are applied.
Change tasks to pre_tasks.
Besides, using set_fact in a loop is not best practice. If you get the value you want, that's OK, I believe you verified it. But you would be better off with (ansible_all_ipv4_addresses | select("match", "10\..*") | list)[0], as in the sketch below.
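Put together, a minimal sketch of the reworked play (assuming fact gathering provides ansible_all_ipv4_addresses and that exactly one address starts with 10.):
- hosts: server.com
  gather_facts: yes
  pre_tasks:
    # pre_tasks run before roles, so private_ip is defined when the role is applied;
    # the doubled backslash is only YAML escaping for the regex 10\..*
    - set_fact:
        private_ip: "{{ (ansible_all_ipv4_addresses | select('match', '10\\..*') | list)[0] }}"
  roles:
    - role: check-server
      server_ip: 10.10.0.1
      client_ip: "{{ private_ip }}"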
Why can't I access these Ansible file variables from within the Ansible task?
I've tried vars_files as well, with combinations of calling global.varname and global[varname]:
- hosts: localhost
  gather_facts: True
  remote_user: root
  - include_vars: site_vars.yml
  tasks:
    - digital_ocean:
        state: present
        command: droplet
        unique_name: yes
        name: do_16_04_common
        api_token: "{{HOST_PROVIDER}}"
global_vars.yml:
global:
  WORKER_TAG_PREFIX:"dev"
  HOST_PROVIDER:"heroku-prod"
Error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'WORKER_TAG_PREFIX'"}
Firstly, your vars file is broken - it requires spaces between : and the values (and you don't need quotes for strings in this example):
global:
  WORKER_TAG_PREFIX: dev
  HOST_PROVIDER: heroku-prod
The above is the reason for the included error, but the playbook also has a syntax error which should be thrown first:
The correct syntax to include vars files at the play level is to define a vars_files key containing a list of files:
- hosts: localhost
  gather_facts: True
  remote_user: root
  vars_files:
    - site_vars.yml
  tasks:
    # ...
On the other hand, include_vars is a module (action) name for a task.
If you wanted to use it, you should add it to the tasks list:
- hosts: localhost
  gather_facts: True
  remote_user: root
  tasks:
    - include_vars: site_vars.yml
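In both cases the values end up nested under the global key, so the task would reference them as global.HOST_PROVIDER (or global['HOST_PROVIDER']), for example:
    - digital_ocean:
        state: present
        command: droplet
        unique_name: yes
        name: do_16_04_common
        api_token: "{{ global.HOST_PROVIDER }}"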
I am trying to do the following:
define appropriate host roles in hostvars
create a role to call ONLY the roles that relate to specific host and have been defined in a variable in hostvars
Is there a way to do this?
eg:
host_vars/hostname_one/main.yml
roles_to_install:
  - role_one
  - role_two
  - ...
run_all_roles.yml
---
- hosts: '{{ TARGET }}'
  become: yes
  ...
  roles:
    - { role: "roles_to_install" }
Obviously this does not work.
Is there a way to make ansible-playbook -i <hosts_file> run_all_roles.yml -e "TARGET=hostname_one" work?
This is not how you should be approaching your roles and inventories.
Instead, if you put your hosts in the inventory in appropriate groups you can use the hosts parameter of the playbook to drive what gets installed where.
For example, I might have a typical web application running on NGINX with some application-specific things (such as a Python environment), fronted by some NGINX servers that serve static content, and there could also be a typical database.
My inventory might then look like this:
[frontend-web-nodes]
web-1.example.org
web-2.example.org
[application-nodes]
app-1.example.org
app-2.example.org
[database-nodes]
database.example.org
Now, I can create a playbook for my database role that installs and configures some database, and set hosts: database-nodes to make sure the play (and so the role(s) it runs) only targets the database.example.org box.
So something like this:
- name: database
  hosts: database-nodes
  roles:
    - database
For my frontend and application web nodes I have a shared dependency on installing and configuring NGINX but my application servers also need some other things. So my front end web nodes can be configured with a simple play like this:
- name: frontend-web
  hosts: frontend-web-nodes
  roles:
    - nginx
While for my application nodes I might either have something like this:
- name: application
  hosts: application-nodes
  roles:
    - nginx
    - application
Or I could just do this:
- name: application
  hosts: application-nodes
  roles:
    - application
And in my roles/application/meta/main.yml define a dependency on the nginx role:
dependencies:
  - role: nginx
As I commented, the solution was easier than expected:
---
- hosts: '{{ TARGET }}'
  become: yes
  vars_files:
    - ./vars/main.yml
  roles:
    - { role: "roleA", when: "'roleA' in roles_to_install" }
    - { role: "roleB", when: "'roleB' in roles_to_install" }
    - ...
Assuming that a correct roles_to_install var is defined inside host_vars/$fqdn/main.yml like so:
---
roles_to_install:
  - roleA
  - roleB
  - ...
Thank you for your assistance, guys.
What about this:
playfile.yml:
- hosts: all
  tasks:
    - when: host_roles is defined
      include_role:
        name: "{{ role_item.name }}"
      loop: "{{ host_roles }}"
      loop_control:
        loop_var: role_item
hostvars_file.yml:
host_roles:
  - name: myrole1
    myrole1_var1: "myrole1_value1"
    myrole1_var2: "myrole1_value2"
  - name: myrole2
    myrole2_var1: "myrole2_value1"
    myrole2_var2: "myrole2_value2"
But then your host_roles would be run during task execution, whereas normally roles are executed before tasks.
Alternatively, why not have a role for this:
roles/ansible.hostroles/tasks/main.yml:
---
# tasks file for ansible.hostroles
- when: host_roles is defined
  include_role:
    name: "{{ role_item.name }}"
  loop: "{{ host_roles }}"
  loop_control:
    loop_var: role_item
playfile.yml:
- hosts: all
  roles:
    - ansible.hostroles
I've noticed strange behavior of the Ansible "copy" module when it works with variables.
So, I have:
1. Config.yml:
- hosts: temp
  vars_prompt:
    - name: server_name
      prompt: "Enter server number: 1, 2, 3..."
      private: no
      default: 5
    - name: server_role
      prompt: "Enter server role: app, admin"
      private: no
      default: admin
    - name: server_type
      prompt: "Enter server type: stage, prod"
      private: no
      default: stage
  pre_tasks:
    - name: Types and roles
      set_fact:
        servername: "{{ server_name }}"
        serverrole: "{{ server_role }}"
        servertype: "{{ server_type }}"
  vars_files:
    - "vars/variables"
  roles:
    - configs
"Configs" role with main.yml:
---
- set_fact: folder=server
  when: serverrole == "app"
- set_fact: folder=admin-server
  when: serverrole == "admin"
- set_fact: stageorprod=stage01
  when: servertype == "stage"
- set_fact: stageorprod=prod
  when: servertype == "prod"
- set_fact: fast={{ stageorprod }}/{{ folder }}/{{ servername }}
- name: Base copying admin-server
  copy: src=admin-server/config dest=/home/tomcat/config/{{ fast }}/
  when: serverrole == "admin"
The config files are in ansible/roles/configs/files/admin-server/config.
When I run the playbook with the default values of the variables (5, admin, stage), I get:
TASK: [configs | set_fact fast={{stageorprod}}/{{folder}}/{{servername}}] *****
ok: [testcen04] => {"ansible_facts": {"fast": "stage01/admin-server/5"}, "item": ""}
TASK: [configs | Base copying admin-server] ***********************************
failed: [testcen04] => {"failed": true, "item": "", "md5sum": "cb2547d6235c078cfda365a5fb3c27c3",
"path": "/home/tomcat/config/stage01/admin-server/config", "state": "absent"}
msg: path /home/tomcat/config/stage01/admin-server/config does not exist
When I run this task one more time with the same values, everything goes OK. But if I change some variable, the error appears again.
I have noticed that other modules, like "template", work fine in the same playbook with these variables. Maybe something is wrong with "copy"?
As you can see, the variable "fast" gets the right value, but somehow the value of "servername" disappeared.
Your question is quite ambiguous as to how you are executing your playbook and what you are trying to accomplish with the prompted variables. If you are trying to spin up servers, it's likely better to declare them in the inventory without prompting. If you are trying to access specific ones, you should likely be using groups to limit their range.
If you are using Ansible to generate a set of hosts for you, you will likely want to store this information somewhere consistent, probably in instance tags, a key-value store such as Redis, a database, or in files, before you spin up the hosts and bootstrap them. Then run a second playbook to include the role.
If you are not in a public cloud, and for some reason cannot tag instances or group them in inventory, you can also try using facts.d to set the facts on the server and have them persist across runs, not just plays. Note that once you write to facts.d files, you should re-run the setup module to gather facts again. Even though I use a public cloud, I still often make use of facts.d.
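A minimal sketch of the facts.d approach (the server.fact file name and its keys are made up for illustration; local facts show up under ansible_local once facts are gathered again):
- hosts: temp
  become: yes
  tasks:
    - name: Make sure the local facts directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
    - name: Persist the prompted values as a local fact
      copy:
        dest: /etc/ansible/facts.d/server.fact
        content: |
          {"role": "{{ server_role }}", "type": "{{ server_type }}"}
    - name: Re-gather facts so ansible_local picks up the new file
      setup:
    - debug:
        var: ansible_local.server.role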